Equivalence Checking for SoC Custom Blocks

As custom blocks become increasingly important for SOCs, equivalence checking between the transistor-level implementation and the behavioral model used in design takes on new significance.

By Simon Napper


Equivalence checking (RTL to gate and gate to gate) has rapidly become the standard for language-based designs. Unfortunately, this approach is not practical for custom designers because their models are neither gate level nor synthesizable RTL (register transfer level). Recently, the availability of symbolic simulation technology that supports both transistor-level and behavioral-level constructs has opened the way for custom designers to have equivalence checking of their own.

Formal verification techniques have proven to be one of the most promising areas of development in electronic design automation (EDA) over the last five years. Formal verification addresses a critical problem (verification) and offers an enormous increase in functional coverage while greatly reducing the time needed to achieve that coverage. One key application has been equivalence checking, in which two designs are compared for functional equivalence. To date, the focus has been on gate-to-gate, RTL-to-gate, and RTL-to-RTL equivalence checking.

Generally speaking, there is a well-known set of requirements that must be met for equivalence checking to work:

  • Synthesizable RT- or gate-level description

  • Synchronous design

  • State-point mapping between the two descriptions

Within these requirements, equivalence checkers are powerful, easy to use, and have been remarkably successful. They have filled a needed gap in synthesis-driven design methodologies and have eliminated weeks of simulation time.

For system-on-a-chip (SOC) design, there is a general recognition that methods other than increasing clock speed have to be found to boost system performance. Power and thermal issues limit clock rate and are difficult to overcome. A key part of the solution is to increase integration and, in particular, increase on-chip memory. This is driving the development of sophisticated custom memories to meet the needs of high-performance SOC designs.

Designers of custom memories and other blocks require equivalence checking for several reasons. They provide a behavioral model to their SOC customers that has to be equivalent to the circuit implementation. These embedded memories have significant logic surrounding them, so there are many opportunities for hard-to-track bugs. The preferred simulation environment (Spice or Fastspice) is likely to result in longer runtimes than the language-based flows. And there is no natural way to compare the output of a Spice-based tool with a reference model.

The custom design community, however, has been left out in the cold by existing equivalence checking tools. First, the reference models are not synthesizable, because there was never any intention to synthesize them. Second, the designs are not necessarily synchronous, and they use complex clocking schemes that current equivalence checkers don't support. Finally, there is no state-point mapping.

With recent advancements in formal verification technologies, however, it's now possible to provide an equivalence checking capability targeted at the custom circuit designer who is developing IP or cores for SOC design.

The enabler: semi-formal symbolic simulation

The advancement in formal verification techniques has made a significant contribution to design verification. Initially, formal verification focused on equivalence checking and model checking. Today, formal verification is increasingly being deployed in new areas. One such method is semi-formal verification, which trades completeness for ease of use: it may not provide 100 percent coverage in all cases, but it has higher capacity and is easier to understand and use.

Symbolic simulation is a good example. Symbolic simulation was first seen around 1986 and has been used by a few major semiconductor companies for a number of years. Recently, symbolic simulation has become commercially available.

Symbolic simulation is event-driven simulation that adds a symbol, or variable, to the input set. Whereas you used to have 0, 1, X, and Z, you can now add S. In the past, the simulator would propagate 0, 1, X, and Z. Today, a symbolic simulator can propagate 0, 1, X, Z, and an expression.

Consider the example of an alarm that sounds an alert when the Door (D) or Window (W) is open and the Alarm (A) is set (see Figure 1). Existing simulators take eight vectors to fully verify the design. Symbolic simulation requires only one. Instead of explicit values, logic expressions are propagated, capturing the relationship from input to output. This allows the simulator to verify the complete behavior very efficiently. The underlying appeal of this technology is that the equivalent binary coverage is 2^n, where n is the number of symbolic inputs. If you use 10 symbolic inputs, that is roughly 1,000X coverage; if you use 100, that's roughly 10^30X coverage. Custom equivalence checking takes advantage of this coverage.
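A minimal Verilog sketch of such an alarm circuit is shown below (the module and signal names are illustrative, not taken from the figure). A conventional simulator needs all 2^3 = 8 input vectors to verify it exhaustively; a symbolic simulator assigns one symbol to each input and propagates the expression (D | W) & A to the output in a single run.

module alarm_circuit (
  input  wire D,      // door open
  input  wire W,      // window open
  input  wire A,      // alarm armed
  output wire alert   // sound the alert
);
  // The symbolic simulator carries this expression through the design
  // instead of enumerating explicit input combinations.
  assign alert = (D | W) & A;
endmodule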

Custom equivalence checking differs from current equivalence checking in the way that it operates, but still enables the custom designer to verify a transistor-level design against a behavioral reference model. Custom equivalence checking relies on two key elements:

  • A unified simulation environment that allows you to compare the output of a transistor-level model with a behavioral reference.

  • Symbolic input propagation to provide complete functional coverage.

Custom equivalence checking works by connecting a transistor-level model, the device under test (DUT), and its reference model (REF) such that they receive identical inputs and their outputs are compared. In that sense, it's similar to traditional verification methods, except that instead of feeding in explicit (1/0) values, symbols (variable values) are fed in. Using this method, it's possible to cover the complete functional verification space in a tiny fraction of the time required by traditional methods (see Figure 2).
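As a sketch of that arrangement, the plain-Verilog harness below instantiates the switch-level model and the behavioral reference side by side and flags any output mismatch. The module and port names are assumptions, and the mechanism for driving symbolic rather than explicit inputs is tool-specific, so it is not shown here.

module compare_harness;
  reg         clk, we;
  reg  [7:0]  addr;
  reg  [15:0] din;
  wire [15:0] dout_dut, dout_ref;

  // Switch-level model built from the Spice netlist
  sram_switch dut (.clk(clk), .we(we), .addr(addr), .din(din), .dout(dout_dut));

  // Behavioral reference model supplied to the SOC customer
  sram_ref ref_model (.clk(clk), .we(we), .addr(addr), .din(din), .dout(dout_ref));

  // Compare the outputs; in practice the comparison is constrained to the
  // cycles in which output data is defined as valid.
  always @(posedge clk)
    if (dout_dut !== dout_ref)
      $display("Mismatch at %0t: dut=%h ref=%h", $time, dout_dut, dout_ref);
endmodule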

The four steps in the equivalence-checking flow are as follows (see Figure 3):

  • Read the Spice netlist and build a Verilog switch-level model of the memory. This will require setting correct transistor directions (or leaving the transistors bi-directional), identifying weak transistors that are involved in "fighting conditions", inserting delays in self-timed circuits such as clock generators, and swapping out analog components with behavioral models.

  • Prepare the constraints. Constrain the inputs: are the control signals 1-hot, mutually exclusive, active high, or active low? Constrain the output checking to the times when output data is valid (a plain-Verilog sketch of these conditions follows this list).

  • Run the comparison and see if there are any differences between the switch and reference models.

  • Debug and fix any differences.
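As an illustration of the second step, the fragment below states in plain Verilog the kind of conditions the constraints express; the real constraint syntax is tool-specific, and the signal names are assumptions. In a symbolic tool, such constraints restrict the symbolic input space rather than merely flag violations.

module constraint_sketch (
  input wire        clk,
  input wire        ren, wen,       // read and write enables, assumed mutually exclusive
  input wire        dout_valid,     // high when output data is defined
  input wire [15:0] dout_dut, dout_ref
);
  // Input constraint: read and write must never be asserted together.
  always @(posedge clk)
    if (ren && wen)
      $display("Constraint violated at %0t: ren and wen both active", $time);

  // Output constraint: compare DUT and REF only while dout_valid is high.
  always @(posedge clk)
    if (dout_valid && (dout_dut !== dout_ref))
      $display("Mismatch at %0t: dut=%h ref=%h", $time, dout_dut, dout_ref);
endmodule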

The result is that you are able to verify a transistor-level implementation (for example, an SRAM) against a behavioral reference model. Instead of running weeks of simulation for partial coverage, you'll be able to achieve exhaustive coverage in a few hours.

Memory examples

The two ends of the verification complexity spectrum can be examined by looking at two types of memory: an SRAM and an SDRAM. An SRAM is conceptually simple. Whether or not you would want to use an equivalence checker for such a simple memory is a valid question.

Although conceptually simple, today's SRAMs are increasingly made structurally very complex in order to maximize performance, so there are plenty of opportunities for hard-to-find errors to creep in. The other end of the spectrum is an SDRAM, a large memory with a complex sequencer. Therefore, not only is data-integrity verification required, but also sequence integrity. Sequence-integrity verification examines all possible legal (and, if required, illegal) sequences to see whether the sequencers (there are four in most SDRAMs) can get locked and stop the memory from functioning.

Single-port SRAM: SRAM verification requires data-integrity verification. In other words, if I write any data to any address and then read it back, will it be faithfully retrieved? The corollary is also required: if I write any data to any address and read back data from any other address (in other words, not the one I just wrote to), will the previous write in any way impact the current read? In this case, the steps are:

  • Read in the Spice netlist

  • Prepare the comparison constraints

  • Execute the comparison

  • Debug differences

Without a doubt, the most laborious task is reading the Spice netlist into an HDL format. We are relying on the HDL (Verilog, for example) to enable us to verify the transistor-level design against a behavioral reference. However, HDL simulators are digital, so information can get lost in the translation from Spice.
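For concreteness, the behavioral reference model for a single-port SRAM might be as simple as the sketch below (widths and port names are illustrative); this is the kind of model the switch-level netlist is checked against.

module sram_ref #(parameter AW = 8, DW = 16) (
  input  wire          clk,
  input  wire          we,          // write enable
  input  wire [AW-1:0] addr,
  input  wire [DW-1:0] din,
  output reg  [DW-1:0] dout
);
  reg [DW-1:0] mem [0:(1<<AW)-1];

  always @(posedge clk)
    if (we) mem[addr] <= din;       // write cycle
    else    dout <= mem[addr];      // read cycle
endmodule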

The information that has to be managed in the Spice-to-HDL translation spans a range of factors.

Weak-transistor identification is required where there are fighting conditions. These tend to occur in the word and bit lines, the latches, and the core memory cells.

Delays are needed for self-timed circuits: clock generators rely on small delays to generate pulses. By default, HDL simulators run with zero delay, so you have to insert delays manually. They don't need to be accurate, just long enough to allow a pulse to be generated and propagated, as in the sketch below.
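As an illustration (the module and signal names are assumptions), the self-timed pulse generator below produces no usable pulse under zero-delay simulation unless a small delay is inserted on the inverted path.

module clk_pulse_gen (
  input  wire clk_in,
  output wire pulse
);
  wire clk_dly;

  // Inserted delay: the value need not be accurate, just non-zero, so that
  // the old and new clock values overlap long enough to form a pulse.
  assign #2 clk_dly = ~clk_in;

  // One pulse per rising edge of clk_in, lasting roughly the inserted delay.
  assign pulse = clk_in & clk_dly;
endmodule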

Analog components must be replaced with behavioral models. HDL simulators are digital and treat a transistor as a switch, so anything that relies on the linear region of the transistor has to be modeled behaviorally. Additionally, you may choose to use behavioral models for blocks that you are confident are functionally correct (for example, sense amps); this will improve runtime performance. A hedged sketch of such a model follows.
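The behavioral stand-in for a sense amplifier, for instance, might simply resolve the differential bit lines to a full-rail digital value when enabled. This is only a sketch; the port list and behavior of a real sense amp will differ.

module sense_amp_beh (
  input  wire sae,     // sense-amp enable
  input  wire bl,      // bit line
  input  wire bl_n,    // complementary bit line
  output reg  dout
);
  // Digital abstraction of an analog block: the small-signal comparison
  // performed by the real circuit cannot be represented at switch level.
  always @(*)
    if (!sae)
      dout = 1'bx;             // output undefined while the amp is disabled
    else if (bl && !bl_n)
      dout = 1'b1;
    else if (!bl && bl_n)
      dout = 1'b0;
    else
      dout = 1'bx;             // bit lines not yet separated
endmodule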

Transistors that don't impact functionality (for example, biasing transistors) should be removed. Finally, there is initialization: occasionally, initial conditions need to be defined. There is no avoiding the time needed to do all of this, and there is a need for utilities to manage the Spice-to-HDL translation process.

Typically, the translation takes from two hours to two days, depending on the complexity of the memory (see Figure 4). Normally, there is a translation environment that reads a configuration file along with the Spice netlist and any model files, which hold the behavioral models of analog components.

You should also expect this to be a main area of focus for future automation that will reduce the amount of intervention needed.

Preparing the comparison constraints

The default equivalence checking process comprises six cycles (see Table 1). If required, you can extend the number of sequences; normally, it is sufficient to tailor each sequence.

The first three cycles allow you to initialize the models and verify that they perform a read and write correctly. The symbolic cycles cover both read and write, and thus cover all possibilities from three writes to three reads.

Even if your application isn't a memory, the same principles apply, even though the operations are not reads and writes. You start with block initialization, run a couple of binary cycles to verify that everything is working, and then run a series of symbolic cycles that allow you to cover the full functional space.

The tailoring of the constraints can be done manually, but it is much simpler if you have a utility to set the conditions up by reading the models you are using and managing the housekeeping.

Despite this effort, you will have to make sure that the constraints you have written are correct. This is a key issue: semi-formal verification provides thorough exploration of whatever you direct it to look at. It is therefore vulnerable to GIGO (garbage in, garbage out).

By this time, you have more or less completed all of the manual work. The equivalence checker runs automatically and generates a counter example of any differences. You can run the equivalence checker and confirm that the read and write cycles run properly (in other words, the checker reads back from both REF and DUT what it wrote in the previous cycle), and then the symbolic cycles will explore the complete functional space.

The log file will tell you if there are any differences, which is where the power of this type of equivalence checking becomes clear.

You have executed six cycles with a number of inputs being symbols, or variables. If there are any differences, an error case file will be generated. This error file contains explicit values for the variables such that the error case can be recreated.

Now, all you have to do is to re-run the check with explicit values. This will create a meaningful waveform trace that will allow you to pinpoint and fix the error.

It's tantamount to a "search and rescue" operation. Symbols sweep out over the whole space, searching for errors. When errors are found, an explicit set of values is created that will lead you directly to them.

There can be a wide range of run-times depending on the size and complexity of the memory.

The longest run-time in Table 2 is under ten minutes. Realistically, run-times can stretch from 30 minutes to a few hours. It is unusual to see run-times go beyond that.

Simple memories (memories without sequences) demonstrate the investment required, as well as the power of equivalence checking.

As equivalence checking relies on an HDL environment where behavioral and transistor-level models can coexist, you have to be prepared for the step of moving from Spice to switch-level Verilog (for example).

Once there, equivalence checking provides coverage and runtime benefits as well as an easy debug flow.

256MB SDRAM: An SDRAM is a memory with complex sequencing. As such, it explores the capacity limits of the equivalence checker as well as the need to verify the command sequences.

SDRAM blocks are normally huge and it is necessary to partition the verification. The SDRAM is partitioned into the memory banks and the sequencers. The first task is to verify that the memory banks store and retrieve data. The second task is to verify that the sequencers work correctly for any legal control sequence.

An SDRAM is a high capacity memory controlled by certain sequences. Each bank has an independent state machine that interacts with the other state machines. Verification of the SDRAM involves verifying both the memory and the state machines and their interactions.

The first test is data integrity. This is where the SDRAM is treated as if it were a large memory. Known, legal read-and-write sequences are used with symbolic data and address. This proves that if the control sequences are legal, the memory will store and retrieve any data to and from any address, as we covered in the SRAM example above.

The second test is sequence integrity. This is where all legal sequences are explored using a small subset of address and data pairs. In other words, is it possible to have legal interaction between the independent state machines such that the memory ceases to work correctly?

SDRAM sequence verification

The approach to sequence verification is to inject legal symbolic commands. For example, if the state machine is in IDLE, the legal commands are ACT (row activate), REF (refresh), MRS (mode register set), and NOP (no operation). At the beginning of each cycle, we determine which bank is going to be accessed (we get this from the bank address). For that bank, we determine the current state, and for that state, we generate a symbolic input that covers all legal commands.

As the input command is symbolic, it covers all possible commands in one cycle. If the bank address is a symbol, it will result in all possible legal commands being generated for all possible banks.

We step through this process 16 times to cover virtually all possible interactions between all the state machines.

To make this type of verification possible, we need a white box model that allows us to explore the internal state of the memory. This can be done with an external model as shown below, or by adding this capability to your reference model (see Figure 6).

The SDRAM verification environment automatically generates legal symbolic input that covers all possible combinations of input values. Having an external white-box model provides the added appeal of being able to independently verify the reference and switch level models.

The verification environment will not initialize your models; that is something for which you need to supply the information. However, once all the models are initialized, the verification environment steps through the sequences and reports when there are differences.

Details of verification

SDRAM verification may not be of general interest. However, the technique for verification is important as it represents a sequence-dominated verification that will be common in many types of memory and other designs.

For the sequence verification, the address and data are largely explicit values. We have run data integrity and verified that the memory works if the commands are legal. What we explore here is whether it is possible to have a sequence of commands that will cause the state machines to lock up. Think of it as a set of nested case statements.

The top-level case statement:

case (bank_address)
  2'b00: active = bank0;
  2'b01: active = bank1;
  2'b10: active = bank2;
  2'b11: active = bank3;
endcase

legal_command(active);

This determines which bank is active and which one will generate the next command, and then runs a task that generates all the legal commands for the active bank. Remember that if bank_address is a symbol, all banks are covered.

The legal_command task now creates the sequences. We can't reproduce the complete state machine here, so we will cover the case mentioned above. Suppose the bank is in IDLE.

IDLE: begin
  case (next_fork)
    2'b00: next_command = NOP;
    2'b01: next_command = REF;
    2'b10: next_command = MRS;
    2'b11: next_command = ACT;
  endcase
end

ACTV: begin
  ...

This applies the same technique used for the banks. We have a value, next_fork. If it were an explicit value, it would pick one of the four options. If we make next_fork a symbol, it covers all paths.

This is a simplified version of the task, because there are some previous states that might pre-empt a value. (For example, if another bank is in a certain state, REF becomes illegal.) This is where the white-box model provides that information.

The key to the approach is actually very simple. You proceed through the tests imagining that, if you were using explicit values, you would pick one of many (one bank out of four and then one command out of four). Then with symbols, you naturally cover all possible combinations.

It's also very important that, when there are differences, the equivalence checker generates explicit values, so that debugging is made much simpler.

The size of the memory (our tests were on a 256MB SDRAM) stresses the capacity of the equivalence checker.

For data integrity, we executed two symbolic writes followed by two symbolic reads. SDRAMs have the capacity for burst reads and writes of almost arbitrary length. We restricted this to bursts of four.

For sequence integrity, we were limited to 16 cycles with explicit data and mainly explicit addresses. However, as we weren't concerned about data integrity, that was sufficient.

For data integrity, the runtimes range from 30 minutes to 3 hours depending on the type of machine. For sequence integrity, the runtimes range from 1 to 4 hours. Both of these are dependent on the complexity of the reference model and the detail of the switch level model.

The most interesting aspect of the SDRAM is sequence integrity. It demonstrates how you can construct a very complex series of tests without having to consider all the strange corner cases. By building a test that, if it were driven by explicit values, would randomly sample the functional space, you can leverage symbols to sweep over everything.

Growing demand

Meeting the market needs for performance, cost, size, and power means that SOCs have to use the most sophisticated blocks and tools. It seems clear that advanced memory structures and other custom blocks will be increasingly important to SOC performance. This will drive the demand for behavioral models that SOC designers can use to develop their chips. This, in turn, will fuel the demand for behavioral models that have been certified as equivalent to the custom implementation.

The convergence of modern SOC design demands and new formal verification technology creates both the need for, and the solution to, equivalence checking for custom designers. This emerging technology will provide freedom for circuit designers and security for the SOC consumer.


Simon Napper is CEO at InnoLogic Systems, Inc. (San Jose). Prior to InnoLogic, Napper was at EPIC Design Technology (now Synopsys) and LSI Logic.

To voice an opinion on this or any other article in Integrated System Design, please e-mail your comments to sdean@cmp.com.
